32 research outputs found

    Appraisal of open software for finite element simulation of 2D metal sheet laser cut.

    Get PDF
    FEA simulation of thermal metal cutting is central to interactive design and manufacturing. It is therefore relevant to assess the applicability of open FEA software to simulate 2D heat transfer in metal sheet laser cuts. Open source codes (e.g. FreeFem++, FEniCS, MOOSE) make additional scenarios possible (e.g. parallel execution, CUDA, etc.) at lower cost. However, a precise assessment is required of the scenarios in which open software can be a sound alternative to a commercial one. This article contributes in this regard by presenting a comparison of the aforementioned open FEM software for the simulation of heat transfer in thin (i.e. 2D) sheets subject to a gliding laser point source. We use the commercial ABAQUS software as the reference against which the open software is compared. A convective linear thin-sheet heat transfer model, with and without material removal, is used. This article does not intend a full design of computer experiments. Our partial assessment shows that the thin-sheet approximation turns out to be adequate in terms of relative error for linear alumina sheets. For mesh resolutions finer than 10e−5 m, the open and reference software temperatures differ by at most 1% of the temperature prediction. Ongoing work includes adaptive re-meshing, nonlinearities, sheet stress analysis, and Mach (also called 'relativistic') effects.
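    As a rough companion to the model described above, the following sketch illustrates the thin-sheet setup with a gliding laser point source. It uses plain explicit finite differences rather than the article's FEA packages, and every material and process value is a hypothetical placeholder, not data from the study:

```python
import numpy as np

# Illustrative sketch only (not the article's FEA models): explicit
# finite-difference solution of linear 2D heat conduction in a thin
# sheet, with a gliding Gaussian laser spot and a lumped convective
# loss term. All parameter values below are assumed.

def simulate_sheet(nx=50, ny=50, steps=200):
    L = 0.05                       # sheet side length [m] (assumed)
    dx = L / (nx - 1)
    alpha = 1.0e-5                 # thermal diffusivity [m^2/s] (assumed)
    h_loss = 5.0                   # lumped convective loss rate [1/s] (assumed)
    T_amb = 300.0                  # ambient temperature [K]
    dt = 0.2 * dx**2 / alpha       # explicit stability: dt*alpha/dx^2 <= 1/4
    xs = np.linspace(0.0, L, nx)
    ys = np.linspace(0.0, L, ny)
    X, Y = np.meshgrid(xs, ys)
    T = np.full((ny, nx), T_amb)
    v = 0.01                       # laser scan speed [m/s] (assumed)
    sigma = 2.0 * dx               # laser spot radius (assumed)
    for n in range(steps):
        x0 = v * n * dt            # current laser position along x
        q = 5.0e4 * np.exp(-((X - x0)**2 + (Y - L/2)**2) / (2.0 * sigma**2))
        lap = np.zeros_like(T)
        lap[1:-1, 1:-1] = (T[1:-1, 2:] + T[1:-1, :-2] +
                           T[2:, 1:-1] + T[:-2, 1:-1]
                           - 4.0 * T[1:-1, 1:-1]) / dx**2
        T = T + dt * (alpha * lap + q - h_loss * (T - T_amb))
        T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = T_amb   # cooled rim
    return T
```

    Comparing such a field against a reference solution on successively finer grids is the kind of relative-error check the article performs between the open packages and ABAQUS.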

    Edge and corner identification for tracking the line of sight

    Get PDF
    This article presents an edge-corner detector, implemented in the realm of the GEIST project (a Computer Aided Touristic Information System), to extract the information of straight edges and their intersections (image corners) from camera-captured (real-world) and computer-generated images (from the database of Historical Monuments, using observer position and orientation data). Camera and computer-generated images are processed for reduction of detail, skeletonization, and corner-edge detection. The corners surviving the detection and skeletonization process in both images are treated as landmarks and fed to a matching algorithm, which estimates the sampling errors that usually contaminate GPS and pose tracking data (fed to the computer-image generator). In this way, a closed-loop control cycle is implemented, by means of which the system converges to the exact position and orientation of an observer traversing a historical scenario (in this case, the city of Heidelberg). With this exact position and orientation, other modules of the GEIST project are able to project historical re-creations onto the observer's field of view, placed in the exact scene (the real image seen by the observer). Thus, the tourist "sees" the scenes unfolding at the material, real historical sites of the city. To this end, this article presents the modification and articulation of algorithms such as the Canny edge detector, the SUSAN corner detector, 1- and 2-dimensional filters, and so on. PACS: 07.05.Pj; MSC: 68Uxx, 68U05, 68U10
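    A minimal numeric sketch of the corner-detection idea follows. It uses a Harris-style response as a stand-in for the Canny/SUSAN pipeline named above (which is not reproduced here); the constant k and the synthetic test image are invented for illustration:

```python
import numpy as np

# Harris-style corner response from image gradients: corners are points
# where the local structure tensor has two large eigenvalues. This is a
# generic sketch, not the article's detector.

def corner_response(img, k=0.05):
    img = img.astype(float)
    # central-difference image gradients
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0
    gy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0
    # structure-tensor entries, box-smoothed over a 3x3 window
    def box3(a):
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0
    sxx, syy, sxy = box3(gx * gx), box3(gy * gy), box3(gx * gy)
    # Harris response: det(M) - k * trace(M)^2
    return sxx * syy - sxy * sxy - k * (sxx + syy)**2

# synthetic image with one bright square: its corners respond strongest,
# its flat interior does not respond at all
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
R = corner_response(img)
```

    Thresholding R and keeping local maxima yields candidate corners, which a matching stage can then treat as landmarks.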

    Upper Limb Posture Estimation in Robotic and Virtual Reality-based Rehabilitation.

    Get PDF
    New motor rehabilitation therapies include virtual reality (VR) and robotic technologies. In limb rehabilitation, limb posture is required to (1) provide a realistic representation of the limb in VR games and (2) assess the patient's improvement. When exoskeleton devices are used in therapy, the measurements of their joint angles cannot be used directly to represent the posture of the patient's limb, since the human and exoskeleton kinematic models differ. In response to this shortcoming, we propose a method to estimate the posture of the human limb attached to the exoskeleton. We use the exoskeleton joint angle measurements and the constraints the exoskeleton imposes on the limb to estimate the human limb joint angles. This paper presents (a) the mathematical formulation and solution of the problem, (b) the implementation of the proposed solution on a commercial exoskeleton system for upper limb rehabilitation, (c) its integration into a rehabilitation VR game platform, and (d) the quantitative assessment of the method during elbow and wrist analytic training. Results show that this method properly estimates the limb posture to (i) animate avatars that represent the patient in VR games and (ii) obtain kinematic data for patient assessment during elbow and wrist analytic rehabilitation.
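    A much-simplified planar analogue of the posture-estimation idea can be sketched as follows: given a wrist position inferred through the exoskeleton, recover the human elbow angle of a 2-link arm by the law of cosines. The link lengths are hypothetical, and the article's 3D human/exoskeleton kinematic models are considerably richer than this:

```python
import math

# Planar 2-link arm: shoulder at the origin, upper arm and forearm of
# known length, wrist at (wx, wy). The interior elbow angle follows
# from the law of cosines. Link lengths are assumed, not measured.

def elbow_angle(wx, wy, upper=0.30, fore=0.25):
    """Interior elbow angle (radians) that places the wrist at (wx, wy)."""
    d2 = wx * wx + wy * wy                   # squared shoulder-to-wrist distance
    c = (upper**2 + fore**2 - d2) / (2.0 * upper * fore)
    c = max(-1.0, min(1.0, c))               # clamp numerical noise
    return math.acos(c)

# fully extended arm: wrist at distance upper + fore, elbow angle = pi
print(elbow_angle(0.55, 0.0))
```

    The actual method inverts the full human limb kinematics subject to the attachment constraints, but the principle is the same: joint angles are recovered from geometry the exoskeleton measurements determine.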

    Spectral-based mesh segmentation

    Get PDF
    In design and manufacturing, mesh segmentation is required for FACE construction in boundary representation (BRep), which in turn is central for feature-based design, machining, parametric CAD, and reverse engineering, among others. Although mesh segmentation is dictated by geometry and topology, this article focuses on the topological aspect (the graph spectrum), as we consider that this tool has not been fully exploited. We preprocess the mesh to obtain an edge-length-homogeneous triangle set and compute its graph Laplacian. We then produce a monotonically increasing permutation of the Fiedler vector (the second eigenvector of the graph Laplacian) to encode the connectivity among part-feature submeshes. Within the permuted vector, discontinuities larger than a threshold (set interactively by a human) determine the partition of the original mesh. We present tests of our method on large, complex meshes, whose results mostly conform to the BRep FACE partition. The achieved segmentations properly locate most manufacturing features, although human interaction is required to avoid over-segmentation. Future work includes an iterative application of this algorithm to progressively sever features from the mesh left by previous submesh removal.
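    The spectral step above can be sketched on a toy graph: build the graph Laplacian of a vertex adjacency, sort the Fiedler vector, and split wherever consecutive sorted entries jump by more than a threshold. The 8-vertex "dumbbell" graph and the threshold value here are illustrative only, not taken from the article:

```python
import numpy as np

# Two dense 4-vertex clusters joined by one bridge edge; the Fiedler
# vector separates them, and the largest jump in its sorted entries
# marks the cut.
edges = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3),   # dense cluster A
         (4, 5), (5, 6), (6, 7), (4, 6), (5, 7),   # dense cluster B
         (3, 4)]                                    # thin bridge A-B
n = 8
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian L = D - A
w, V = np.linalg.eigh(L)                # eigenvalues in ascending order
fiedler = V[:, 1]                       # eigenvector of 2nd-smallest eigenvalue
order = np.argsort(fiedler)             # monotone permutation of the vector
gaps = np.diff(fiedler[order])
threshold = 0.3                         # set interactively in the article
cuts = np.nonzero(gaps > threshold)[0] + 1
parts = [set(piece.tolist()) for piece in np.split(order, cuts)]
```

    On a real mesh the adjacency comes from the preprocessed triangle set, and each resulting part is a candidate FACE submesh.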

    The Changing Landscape for Stroke Prevention in AF: Findings From the GLORIA-AF Registry Phase 2

    Get PDF
    Background GLORIA-AF (Global Registry on Long-Term Oral Antithrombotic Treatment in Patients with Atrial Fibrillation) is a prospective, global registry program describing antithrombotic treatment patterns in patients with newly diagnosed nonvalvular atrial fibrillation at risk of stroke. Phase 2 began when dabigatran, the first non–vitamin K antagonist oral anticoagulant (NOAC), became available. Objectives This study sought to describe phase 2 baseline data and compare these with the pre-NOAC era collected during phase 1. Methods During phase 2, 15,641 consenting patients were enrolled (November 2011 to December 2014); 15,092 were eligible. This pre-specified cross-sectional analysis describes eligible patients' baseline characteristics. Atrial fibrillation disease characteristics, medical outcomes, and concomitant diseases and medications were collected. Data were analyzed using descriptive statistics. Results Of the total patients, 45.5% were female; median age was 71 (interquartile range: 64, 78) years. Patients were from Europe (47.1%), North America (22.5%), Asia (20.3%), Latin America (6.0%), and the Middle East/Africa (4.0%). Most had high stroke risk (CHA2DS2-VASc [Congestive heart failure, Hypertension, Age ≥75 years, Diabetes mellitus, previous Stroke, Vascular disease, Age 65 to 74 years, Sex category] score ≥2; 86.1%); 13.9% had moderate risk (CHA2DS2-VASc = 1). Overall, 79.9% received oral anticoagulants, of whom 47.6% received NOAC and 32.3% vitamin K antagonists (VKA); 12.1% received antiplatelet agents; 7.8% received no antithrombotic treatment. For comparison, the proportion of phase 1 patients (of N = 1,063 all eligible) prescribed VKA was 32.8%, acetylsalicylic acid 41.7%, and no therapy 20.2%. In Europe in phase 2, treatment with NOAC was more common than VKA (52.3% and 37.8%, respectively); 6.0% of patients received antiplatelet treatment; and 3.8% received no antithrombotic treatment.
    In North America, 52.1%, 26.2%, and 14.0% of patients received NOAC, VKA, and antiplatelet drugs, respectively; 7.5% received no antithrombotic treatment. NOAC use was less common in Asia (27.7%), where 27.5% of patients received VKA, 25.0% antiplatelet drugs, and 19.8% no antithrombotic treatment. Conclusions The baseline data from GLORIA-AF phase 2 demonstrate that in newly diagnosed nonvalvular atrial fibrillation patients, NOAC have been highly adopted into practice, becoming more frequently prescribed than VKA in Europe and North America. Worldwide, however, a large proportion of patients remain undertreated, particularly in Asia and North America. (Global Registry on Long-Term Oral Antithrombotic Treatment in Patients With Atrial Fibrillation [GLORIA-AF]; NCT01468701)

    Geometric constraint subsets and subgraphs in the analysis of assemblies and mechanisms

    Get PDF
    Geometric Reasoning ability is central to many applications in CAD/CAM/CAPP environments. An increasing demand exists for Geometric Reasoning systems that evaluate the feasibility of virtual scenes specified by geometric relations. Thus, the Geometric Constraint Satisfaction or Scene Feasibility (GCS/SF) problem consists of a basic scenario containing geometric entities, whose context is used to propose constraining relations among still-undefined entities. If the constraint specification is consistent, the answer to the problem is one of the finitely or infinitely many solution scenarios satisfying the prescribed constraints. Otherwise, a diagnostic of inconsistency is expected. The three main approaches used for this problem are numerical, procedural (or operational), and mathematical. Numerical and procedural approaches answer only part of the problem, and are not complete in the sense that a failure to provide an answer does not preclude the existence of one. The mathematical approach previously presented by the authors describes the problem using a set of polynomial equations. The common roots of this set of polynomials characterize the solution space for such a problem. That work presents the use of Groebner basis techniques for verifying the consistency of the constraints. It also integrates subgroups of the Special Euclidean Group of Displacements SE(3) into the problem formulation to exploit the structure implied by geometric relations. Although theoretically sound, these techniques require large amounts of computing resources. This work proposes Divide-and-Conquer techniques applied to local GCS/SF subproblems to identify strongly constrained clusters of geometric entities. The identification and preprocessing of these clusters generally reduces the effort required in solving the overall problem. Cluster identification can be related to identifying short cycles in the Spatial Constraint graph for the GCS/SF problem.
    Their preprocessing uses the aforementioned Algebraic Geometry and Group theoretical techniques on the local GCS/SF problems that correspond to these cycles. Besides improving the efficiency of the solution approach, the Divide-and-Conquer techniques capture the physical essence of the problem. This is illustrated by applying the discussed techniques to the analysis of the degrees of freedom of mechanisms.
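    The cluster-identification step can be sketched as a search for short cycles in the constraint graph: the shortest cycles (triangles) mark strongly constrained clusters worth preprocessing as local subproblems. The toy constraint graph below is illustrative, not a real GCS/SF instance:

```python
from collections import defaultdict

# Find all 3-cycles (triangles) of an undirected graph: for each edge
# (u, v), every common neighbor w of u and v closes a triangle.

def triangles(edges):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    found = set()
    for u, v in edges:
        for w in adj[u] & adj[v]:            # common neighbor closes a 3-cycle
            found.add(frozenset((u, v, w)))
    return found

# entities A, B, C are mutually constrained (a rigid cluster); the
# chain C-D-E carries no short cycle and stays with the global problem
constraints = [("A", "B"), ("B", "C"), ("A", "C"),
               ("C", "D"), ("D", "E")]
clusters = triangles(constraints)
```

    Each detected cluster would then be solved locally with the Groebner basis machinery before the global problem is assembled.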

    Triangle mesh skeletonization using non-deterministic voxel thinning and graph spectrum segmentation

    No full text
    In the context of shape processing, the estimation of the medial axis is relevant for the simplification and re-parameterization of 3D bodies. The currently used methods are based on (1) general fields, (2) geometric methods, and (3) voxel-based thinning. They present shortcomings such as (1) over-representation and non-smoothness of the medial axis due to high-frequency nodes and (2) biased skeletons due to skewed thinning. To partially overcome these limitations, this article presents a non-deterministic algorithm for the estimation of the 1D skeleton of triangular B-Rep or voxel-based body representations. Our method articulates (1) a novel randomized thinning algorithm that avoids possible skewing in the final skeletonization, (2) spectral-based segmentation that eliminates short dead-end branches, and (3) a maximal-excursion method for the reduction of high frequencies. The test results show that the randomized order in the removal of the instantaneous skin of the solid region eliminates bias in the skeleton, thus respecting features of the initial solid. An Alpha Shape-based inversion of the skeleton encoding yields triangular boundary representations of the original body, which present reasonable quality for fast, non-minute scenes. Future work is needed to (a) tune the spectral filtering of high frequencies off the basic skeleton and (b) extend the algorithm to solid regions whose skeletons mix 1D and 2D entities.
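    The randomized-thinning idea can be sketched in 2D: peel boundary pixels in random order, keeping a pixel whenever its removal would break local connectivity or erase a curve endpoint. The article's 3D simple-voxel tests are more involved than this 2D crossing-number test, so treat the following as a sketch of the principle only:

```python
import random

# Sequential randomized thinning of a 2D binary region stored as a
# dict of foreground pixels. A border pixel is removed when it is
# "simple": exactly one 0->1 transition around its 8-neighbor ring
# (removal preserves connectivity) and at least two foreground
# neighbors (so curve endpoints survive).

N8 = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
      (1, 1), (1, 0), (1, -1), (0, -1)]      # clockwise ring of neighbors

def ring_of(img, r, c):
    return [img.get((r + dr, c + dc), 0) for dr, dc in N8]

def transitions(ring):
    # number of 0->1 steps around the cyclic ring
    return sum(1 for a, b in zip(ring, ring[1:] + ring[:1]) if a == 0 and b == 1)

def thin(pixels, seed=0):
    rng = random.Random(seed)
    img = {p: 1 for p in pixels}
    changed = True
    while changed:
        changed = False
        border = [p for p in img
                  if any(img.get((p[0] + dr, p[1] + dc), 0) == 0
                         for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)))]
        rng.shuffle(border)                   # randomized removal order
        for p in border:
            ring = ring_of(img, *p)
            if sum(ring) >= 2 and transitions(ring) == 1:
                del img[p]                    # simple, non-endpoint pixel
                changed = True
    return set(img)

# a thick bar thins to a thin curve inside the original region
bar = {(r, c) for r in range(5) for c in range(20)}
skeleton = thin(bar)
```

    The random shuffle is what removes the directional bias a fixed scan order would impose; the article applies the same idea to the instantaneous skin of a voxelized solid.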

    Edge and corner identification for tracking the line of sight

    No full text
    This article presents an edge and corner detector, implemented in the domain of the GEIST project (a Computer Aided Tourist Information System), to extract the information of straight edges and their intersections (corners in the image) from camera images (of the real world) contrasted with computer-generated images (from the Historical Monuments Database, based on the position and orientation of a virtual observer). The camera and computer-generated images are processed to reduce detail, find the skeleton of the image, and detect edges and corners. The corners surviving the detection and skeletonization process in both images are treated as reference points and fed to a matching algorithm, which estimates the sampling errors that usually contaminate the GPS and orientation data (fed to the computer image generator). In this way, a closed-loop control cycle is implemented, by means of which the system converges to the exact position and orientation of an observer traversing a historical scenario (in this case, the city of Heidelberg). With this exact position and orientation, other modules of the GEIST project are able to project historical re-creations onto the observer's field of view, placed in the exact scene (the real image seen by the observer). Thus, the tourist "sees" the scenes unfolding at the material, real historical sites of the city. To this end, this article presents the modification and articulation of algorithms such as the Canny edge detector, the SUSAN corner detector, 1- and 2-dimensional filters, and so on.
